The goals / steps of this project are the following:

- Compute the camera calibration matrix and distortion coefficients from chessboard images
- Apply the distortion correction to raw images
- Use color and gradient thresholds to create a thresholded binary image
- Apply a perspective transform to get a bird's-eye view of the lane
- Detect lane-line pixels and fit a polynomial to each lane boundary
- Determine the curvature of the lane and the vehicle position with respect to center
- Warp the detected lane boundaries back onto the original image and output a visual display
The code referenced in this README is in the IPython notebook at `./project4.ipynb`. The logic for this section is in code cell #2 of the notebook.
There are two main steps to this process: computing the camera calibration matrix and distortion coefficients from the chessboard calibration images, and then applying them to undistort the road images.
I pass the camera matrix and distortion coefficients returned from the calibration function to the undistort function to undo the distortion for one of the test images. The code is in code cell #3, and the result is shown below.
I used a combination of a color threshold (on the S channel) and a directional gradient threshold to generate a binary image; the code is in code cell #4. Here's an example of my output for this step.
I performed the perspective transform in code cell #5. Following the writeup example, I used the provided `src` and `dst` points and confirmed that the transform worked well on my test image: I verified it by drawing the `src` and `dst` points on both the test image and the warped image, and in the warped image the lane lines appear parallel.
```python
src = np.float32(
    [[(img_size[0] / 2) - 55, img_size[1] / 2 + 100],
     [((img_size[0] / 6) - 10), img_size[1]],
     [(img_size[0] * 5 / 6) + 60, img_size[1]],
     [(img_size[0] / 2 + 55), img_size[1] / 2 + 100]])
dst = np.float32(
    [[(img_size[0] / 4), 0],
     [(img_size[0] / 4), img_size[1]],
     [(img_size[0] * 3 / 4), img_size[1]],
     [(img_size[0] * 3 / 4), 0]])
```
This resulted in the following source and destination points:
| Source | Destination |
|---|---|
| 585, 460 | 320, 0 |
| 203, 720 | 320, 720 |
| 1127, 720 | 960, 720 |
| 695, 460 | 960, 0 |
The code for this section is in code cell #6 of the notebook; the relevant methods are `sliding_window` and `find_peaks`.
With a thresholded, warped image I can map out the lane lines. First I take a histogram along all the columns in the lower half of the image, then find the peaks of the left and right halves of the histogram; these become the starting points for the left and right lines. From there I use a sliding-window search to follow each line up the image and fit a second-order polynomial to it.
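The steps above can be sketched as follows. This is a simplified version of the idea, not the notebook's exact `sliding_window`/`find_peaks` code; window count, margin, and minimum-pixel values are assumed defaults:

```python
import numpy as np

def find_lane_bases(binary_warped):
    """Histogram of the lower half of the image; the peaks give the base columns."""
    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    leftx_base = np.argmax(histogram[:midpoint])
    rightx_base = np.argmax(histogram[midpoint:]) + midpoint
    return leftx_base, rightx_base

def fit_line(binary_warped, x_base, nwindows=9, margin=100, minpix=50):
    """Slide windows up from x_base, collect nonzero pixels, fit x = f(y)."""
    window_height = binary_warped.shape[0] // nwindows
    nonzeroy, nonzerox = binary_warped.nonzero()
    x_current = x_base
    lane_inds = []
    for window in range(nwindows):
        y_low = binary_warped.shape[0] - (window + 1) * window_height
        y_high = binary_warped.shape[0] - window * window_height
        good = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                (nonzerox >= x_current - margin) &
                (nonzerox < x_current + margin)).nonzero()[0]
        lane_inds.append(good)
        # Re-center the next window on the mean x of the pixels found
        if len(good) > minpix:
            x_current = int(np.mean(nonzerox[good]))
    lane_inds = np.concatenate(lane_inds)
    # Second-order polynomial fit: x as a function of y
    return np.polyfit(nonzeroy[lane_inds], nonzerox[lane_inds], 2)
```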
The code for finding the radius of curvature and the position of the vehicle with respect to center is in code cell #7 of the notebook.
These are the steps to find the curvature of the lane line:
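As a sketch, the radius of curvature of a second-order fit x = Ay² + By + C is R = (1 + (2Ay + B)²)^(3/2) / |2A|, evaluated after refitting in metric space. The meters-per-pixel conversions below are the commonly assumed values for this image size (roughly 30 m per 720 pixels vertically, 3.7 m per 700 pixels horizontally), not measured quantities:

```python
import numpy as np

# Assumed pixel-to-meter conversions for a 1280x720 warped image
ym_per_pix = 30 / 720   # meters per pixel in the y direction
xm_per_pix = 3.7 / 700  # meters per pixel in the x direction

def radius_of_curvature(ploty, fitx, y_eval):
    """Refit the line in metric space and evaluate the curvature at y_eval."""
    fit_m = np.polyfit(ploty * ym_per_pix, fitx * xm_per_pix, 2)
    A, B = fit_m[0], fit_m[1]
    y = y_eval * ym_per_pix
    # R = (1 + (2Ay + B)^2)^(3/2) / |2A|
    return (1 + (2 * A * y + B) ** 2) ** 1.5 / np.abs(2 * A)
```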
To calculate the offset from center:
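A minimal sketch of the offset calculation, assuming the camera is mounted at the car's centerline so the image center corresponds to the vehicle position:

```python
xm_per_pix = 3.7 / 700  # assumed: a 3.7 m lane spans roughly 700 warped pixels

def vehicle_offset(image_width, left_fitx_bottom, right_fitx_bottom):
    """Offset = (image center - lane center) at the bottom row, in meters."""
    lane_center = (left_fitx_bottom + right_fitx_bottom) / 2.0
    return (image_width / 2.0 - lane_center) * xm_per_pix
```

A positive value means the car is to the right of the lane center.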
The pipeline code that projects the detected lane back onto the road image, with the curvature and offset printed on it, is in code cells #8 and #9 of the notebook. This is what the final image looks like.
Here’s a link to my video result
In building out the pipeline, I followed the Udacity course instructions and the goals/steps from the writeup template. I built it one step at a time, making sure each result was what I expected before moving on to the next step. The IPython notebook was useful here because I could quickly visualize my work; alternatively, I could have moved the code out to Python script files and run it from the command line.
These are the areas where my pipeline can fail:

- Lanes with different colors
- Dramatic changes in surface color, e.g. from asphalt to dirt road
- Sharp curves
- Driving at night
I would further work on the color and gradient threshold step to better detect the lane lines in varying road and lighting conditions. My current solution doesn't work well on the challenge video, so I'd like to work on that next.